Insights from AWS re:Inforce「Secure your healthcare Generative AI workloads on Amazon EKS」#DAP221 #AWSreInforce

2024.06.14

Introduction

Hello, this is Hemanth from the Department of Alliance. In this blog, I'll cover the AWS re:Inforce 2024 session "Securing Healthcare Generative AI Workloads on Amazon EKS". The session offered insightful tactics and practical advice for improving the security of generative AI applications in the medical field.

Check out the Full Session Video below

Session Speakers

Sai Charan Teja Gopaluni, a senior specialist solutions architect for AWS Container Services, and Antoinette Mills, a specialist senior sales representative for application modernization at AWS, led the session. Drawing on their extensive knowledge of AI and cloud computing, both presenters offered practical insights.

Agenda Highlights

  • Use Cases of Generative AI in Healthcare
  • Analyzing Vulnerability Landscapes for Generative AI Workloads
  • In-Depth Use-Case Walkthrough on Amazon EKS
  • Key Takeaways

Generative AI in Healthcare – Use Cases

By improving patient interactions, automating clinical paperwork, and democratizing data access, generative AI is revolutionizing the healthcare industry. Key use cases include:

Clinical Note Generation: Automatically generate clinical notes from patient conversations using AWS HealthScribe. Amazon SageMaker facilitates the integration and fine-tuning of foundation models from Hugging Face.

AI-Powered Q&A Chatbots: Amazon Pharmacy uses retrieval-augmented generation (RAG) to build effective Q&A chatbots that streamline patient interactions.

Data Democratization: Open repositories enable access to vast imaging datasets, which can be analyzed using machine learning models to gain valuable insights.

According to a recent survey, 94% of executives prioritize safeguarding AI technologies before deploying them, highlighting the critical need for strong security measures.

Generative AI Vulnerability Landscape – STRIDE Model

Spoofing

Implications

Modified Treatment Suggestions: Deceptive treatment recommendations produced by AI using fictitious data.

Compromised Research Outcomes: Research results skewed by manipulated data sources.

Mitigations

Robust Identity and Access Management: Enforce stringent role-based access controls (RBAC) and strong identity verification procedures to guarantee that only authorized personnel can view and alter sensitive medical data.

Infrastructure Security: Uphold strict infrastructure security procedures, such as frequent audits, encryption of data in transit and at rest, and continuous monitoring for unauthorized access attempts.

Tampering

Implications

Inaccurate Diagnosis: Patients may suffer as a result of erroneous medical diagnoses made by compromised AI models.

Skewed Research Results: Altered models and data can skew study findings, undermining the validity and reliability of medical research.

Mitigations

Shift-Left Approach to Regulatory Compliance: Incorporate security and compliance tests early in the development process to find and fix vulnerabilities before deployment. This proactive strategy helps ensure adherence to healthcare standards and regulations.

Image Artifact Signing and Scanning: Use cryptographic signing to confirm the authenticity and integrity of container images, and scan them regularly for vulnerabilities.

Repudiation

Implications

Model Stealing: Theft of intellectual property when attackers gain unauthorized access to AI models and can duplicate and abuse them.

Side-Channel Attacks: Techniques that exploit unintended information channels to breach patient privacy.

Mitigations

Differential Privacy with Tenancy Strategies: Apply differential privacy techniques to anonymize sensitive data and manage tenancy strategies effectively. By keeping data secure when it is shared across environments, this approach reduces the risk of model theft.

Image Security: Put strict image security procedures in place to guard against inference issues, container security weaknesses, and possible model theft. This entails running routine security checks and using verified, signed container images.

Forensic Runbooks: Create and apply forensic runbooks to examine and isolate containers. These runbooks help systematically detect and respond to potential side-channel attacks by looking for irregularities in container behavior and activity.

Information Disclosure

Mitigation

Pod Security Solutions: Implement pod security policies to enforce security standards at the pod level. This involves defining security contexts, network policies, and role-based access control (RBAC) so that only authorized pods can access sensitive data.

Encryption of Data in Transit and at Rest: To safeguard data during transmission as well as at rest, use strong encryption.

Kubernetes Secrets Management: Use AWS Secrets Manager with Kubernetes Secrets to safely handle sensitive data, including tokens, passwords, and API keys.
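As an illustration of that pattern, here is a minimal sketch of an application fetching database credentials from AWS Secrets Manager with boto3; the secret name and the IRSA-based permission are assumptions, not details from the session.

```python
# Minimal sketch: read vector-database credentials from AWS Secrets Manager.
# Assumptions: a secret named "genai/vector-db-credentials" exists and the pod's
# service account (IRSA) is allowed to call secretsmanager:GetSecretValue.
import json

import boto3


def get_db_credentials(secret_id: str = "genai/vector-db-credentials") -> dict:
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    creds = get_db_credentials()
    print(sorted(creds.keys()))  # never log the secret values themselves
```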

Denial of Service

Implications

Disrupted Healthcare Operations: Denial-of-service (DoS) attacks can impair vital healthcare services by causing outages in critical systems such as diagnostic tools and electronic health records (EHRs).

Compromised Patient Care: DoS attacks can make AI-driven apps and services unavailable, leading to delays in diagnosis and treatment and degrading the quality and outcomes of patient care.

Mitigation

Network Security Rules: Implement strong network security rules to identify and mitigate DoS attacks.

Traffic Monitoring: Watch for unusual patterns in network traffic that might indicate a denial-of-service attack.

By putting these mitigation techniques into practice, healthcare systems can withstand DoS attacks and continue to deliver high-quality patient care and services.

Elevation of Privileges

Implications

Reputational Damage: Unauthorized access to sensitive systems and data can cause healthcare companies serious reputational harm. Trust is essential in the healthcare industry, and any breach can erode patient confidence and corporate reputation.

Legal Repercussions: Unauthorized privilege escalation can expose an organization to legal liability and fines for violating regulations such as HIPAA, including penalties, legal action, and revocation of certifications.

Mitigation

Runtime Monitoring Controls: Implement runtime monitoring to detect and address suspicious activity immediately.

Investigative Controls: Use investigative controls to find and mitigate privilege escalation risks before they can be exploited.

Incident Response Plan: Create a thorough incident response plan to handle any security events involving privilege escalation.

Secure Workforce Mobility with Zero Trust: Adopt secure workforce mobility policies aligned with a Zero Trust security model so that all access is continuously validated.

Healthcare companies can improve their security posture against privilege escalation risks, protect patient data, and remain compliant with regulations by incorporating these mitigation techniques.

Use-Case Walkthrough

Intelligent Health Assistant

Built on the Mistral model family, BioMistral-7B is an open-source Large Language Model (LLM) created specifically for the biomedical domain. The main characteristics and capabilities of BioMistral-7B are listed below:

Base Model: Designed to handle intricate biomedical questions and tasks, BioMistral-7B is based on Mistral-7B-Instruct-v0.1.

Data Sources: The model was further pre-trained on textual data from PubMed Central Open Access, with compute resources provided by the French National Centre for Scientific Research (CNRS).

Integration with Transformers Library: BioMistral-7B integrates smoothly with Hugging Face's Transformers library, enabling simple deployment and extension in AI applications.

Evaluation of Performance: BioMistral-7B performed well on a benchmark of ten established medical question-answering (QA) tasks in English. It has also been evaluated in seven other languages, making it more widely applicable in healthcare environments.

With its domain-specific design and rigorous training regimen, BioMistral-7B is an effective tool for building intelligent health assistants that can improve patient care and medical decision-making.

Architectural Pattern: Use KubeRay to create Ray clusters that scale the inference workloads, and Karpenter to provision the necessary infrastructure. Deploy models from Hugging Face, tokenize the inputs, and expose them through an API such as a FastAPI Python app.
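As a rough illustration of the serving layer described above, the sketch below loads a Hugging Face model into a FastAPI app and exposes a /generate endpoint. The model id "BioMistral/BioMistral-7B", the endpoint path, and the GPU-backed pod are assumptions rather than details taken from the session.

```python
# Minimal FastAPI inference sketch (assumes the transformers, accelerate and
# fastapi packages are installed and the pod has enough GPU/CPU memory).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "BioMistral/BioMistral-7B"  # assumption: public Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

app = FastAPI()


class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 256


@app.post("/generate")
def generate(prompt: Prompt):
    # Tokenize the request, run generation, and return the decoded text.
    inputs = tokenizer(prompt.text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=prompt.max_new_tokens)
    return {"completion": tokenizer.decode(output[0], skip_special_tokens=True)}
```

This could be run with, for example, `uvicorn app:app --host 0.0.0.0 --port 8000` inside the container; in the Ray-based pattern from the talk it could instead be wrapped in a Ray Serve deployment.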

Tying it Back to the Vulnerability Landscape

Spoofing

At the Kubernetes level, role-based access control with role-bound service accounts defines what a pod can and cannot do. IAM provides similar capabilities on the AWS side, and tying the two together enforces least-privilege access. CloudTrail additionally provides continuous visibility into API activity.
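To make that concrete, here is a hedged sketch that creates a namespaced Role and RoleBinding for a workload's service account using the official Kubernetes Python client; the namespace, role name, and service account name are illustrative assumptions.

```python
# Minimal RBAC sketch: grant a service account read-only access to pods and
# configmaps in its own namespace (least privilege at the Kubernetes level).
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

NAMESPACE = "genai"  # assumption: namespace used by the inference workload

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "inference-reader", "namespace": NAMESPACE},
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods", "configmaps"],
        "verbs": ["get", "list", "watch"],
    }],
}
rbac.create_namespaced_role(namespace=NAMESPACE, body=role)

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "inference-reader-binding", "namespace": NAMESPACE},
    "subjects": [{"kind": "ServiceAccount", "name": "inference-sa", "namespace": NAMESPACE}],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io", "kind": "Role", "name": "inference-reader"},
}
rbac.create_namespaced_role_binding(namespace=NAMESPACE, body=binding)
```

On the AWS side, the same service account can be mapped to an IAM role (for example via IRSA) so that both layers enforce least privilege.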

Tampering

Use Open Policy Agent (OPA) to ensure that container images are cryptographically signed with AWS Signer; this check is enforced at the end of the delivery stage. To keep images free of vulnerabilities through programmatic package checks and image scanning, use Amazon Inspector.
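As one way to wire the Inspector part into a delivery pipeline, the sketch below queries Amazon Inspector for critical findings on an ECR repository and fails the step if any exist; the repository name and the pass/fail policy are assumptions.

```python
# Hedged sketch: gate a deployment on Amazon Inspector findings for the image
# repository. Assumes Inspector ECR scanning is enabled and the caller has
# inspector2:ListFindings permission.
import boto3

REPOSITORY = "biomistral-inference"  # assumption: ECR repository name

inspector = boto3.client("inspector2")
response = inspector.list_findings(
    filterCriteria={
        "ecrImageRepositoryName": [{"comparison": "EQUALS", "value": REPOSITORY}],
        "severity": [{"comparison": "EQUALS", "value": "CRITICAL"}],
    },
    maxResults=50,
)

if response["findings"]:
    raise SystemExit(f"{len(response['findings'])} critical findings - blocking deployment")
print("No critical findings - image may be promoted")
```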

Repudiation

Adopt strong telemetry patterns: use a logging driver to ingest application and infrastructure logs into an analytics platform. CloudTrail and CloudWatch offer similar functionality with additional insights.
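For the AWS-side audit trail, a small sketch like the one below pulls recent EKS control-plane API activity from CloudTrail; the 24-hour window and event source filter are illustrative choices.

```python
# Hedged sketch: list recent EKS-related management events from CloudTrail
# for audit and non-repudiation purposes.
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")
now = datetime.utcnow()

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource", "AttributeValue": "eks.amazonaws.com"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    MaxResults=50,
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```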

Information Disclosure

The data resides in persistent volumes (PVs) created by CSI drivers. If the models need access to a vector database, it is best to store the credentials as secrets and encrypt them; AWS KMS and Secrets Manager provide these capabilities and help mitigate information disclosure.

Denial of Service

Scope workloads with network policies (netpol) and sub-segmentation, using soft multi-tenancy with namespaces (ns), so that traffic can be isolated quickly during an attack. At the AWS level, use AWS Shield and Firewall Manager at the VPC level as countermeasures.
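One concrete isolation lever is a default-deny ingress NetworkPolicy applied to the workload namespace, either ahead of time or during an incident; the sketch below uses the Kubernetes Python client, and the namespace name is an assumption.

```python
# Hedged sketch: apply a default-deny ingress NetworkPolicy to the namespace
# so all inbound pod traffic is blocked until explicit allow rules are added.
# Requires a CNI that actually enforces NetworkPolicy.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

NAMESPACE = "genai"  # assumption

default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": NAMESPACE},
    # An empty podSelector matches every pod; listing only "Ingress" with no
    # ingress rules means no inbound traffic is allowed.
    "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
}
networking.create_namespaced_network_policy(namespace=NAMESPACE, body=default_deny)
```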

Elevation of Privilege

This vulnerability goes hand in hand with authenticity. Put mitigation measures in place ahead of time, such as resource limits, so that a pod can be isolated quickly. At the AWS level, use runtime tools such as GuardDuty for runtime monitoring of Kubernetes, and AWS Config with Lambda functions for automated remediation.
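As an example of the automated-remediation idea, the sketch below is a Lambda handler that could be triggered by an EventBridge rule on GuardDuty findings and moves the affected instance into a quarantine security group. The severity threshold, the QUARANTINE_SG environment variable, and the event field paths are assumptions and would need adjusting for EKS-specific finding types.

```python
# Hedged sketch of an auto-remediation Lambda for high-severity GuardDuty findings.
import os

import boto3

ec2 = boto3.client("ec2")
QUARANTINE_SG = os.environ["QUARANTINE_SG"]  # assumption: a deny-all security group


def handler(event, context):
    detail = event["detail"]
    if detail.get("severity", 0) < 7:  # act only on high-severity findings
        return {"action": "none"}

    # This field path applies to EC2-based findings; EKS/runtime findings
    # expose the affected resource under different keys.
    instance_id = detail["resource"]["instanceDetails"]["instanceId"]
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
    return {"action": "quarantined", "instance": instance_id}
```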

Key Takeaways

  • Use cases of generative AI in healthcare show notable gains in both operational effectiveness and patient care.

  • The STRIDE framework offers a methodical way to identify and address vulnerabilities in AI workloads.

  • The architectural patterns presented allow inference workloads to be deployed on Amazon EKS in a scalable and secure manner.

  • Applying AWS security best practices is essential to putting a thorough defense-in-depth strategy into action.

Conclusion

Healthcare organizations have a strategic opportunity as well as a need to secure generative AI workloads. Organizations can fully utilize AI while maintaining compliance and data protection by incorporating the architectural patterns and security procedures that have been mentioned. Innovative and safe healthcare solutions will be made possible by the proactive adoption of these principles.

